Technology terms are used to communicate information accurately in the field of science and technology. Automatically recognizing technology terms from text can help experts and the public discover, recognize, and apply new technologies, which is of great value; however, unsupervised technology term recognition methods still have limitations, such as complex rules and poor adaptability. To improve the recognition of technology terms from text, an unsupervised technology term recognition method was proposed. Firstly, a syntactic structure tree was constructed through constituency parsing. Then, candidate technology terms were extracted from both top-down and bottom-up perspectives. Finally, statistical frequency and semantic information were combined to determine the most appropriate technology terms. In addition, a technology term dataset was constructed to validate the effectiveness of the proposed method. Experimental results on this dataset show that the proposed method with top-down extraction improves the F1 score by 4.55 percentage points compared with the dependency-based method. Meanwhile, a case study in the field of 3D printing shows that the technology terms recognized by the proposed method are consistent with the development of the field, and can be used to trace the development process of technology and depict its evolution path, thereby providing references for understanding, discovering, and exploring future technologies in the field.
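The two extraction perspectives over a constituency tree can be sketched as follows. This is a minimal illustration, not the paper's exact rules: the tree is assumed to be a nested `(label, children-or-word)` tuple, and noun phrases (`NP`) are assumed to be the candidate phrase type, with top-down extraction keeping maximal NPs and bottom-up extraction keeping minimal (innermost) NPs.

```python
# Hedged sketch: candidate term extraction from a constituency parse tree.
# Tree format (label, children) with a word string at the leaves, and the
# NP-based heuristic, are illustrative assumptions, not the paper's method.

def leaves(tree):
    """Collect the terminal words of a (label, children-or-word) tree."""
    label, body = tree
    if isinstance(body, str):
        return [body]
    words = []
    for child in body:
        words.extend(leaves(child))
    return words

def extract_top_down(tree, target="NP"):
    """Top-down: keep the *maximal* target phrases; once a subtree is
    accepted as a candidate, do not descend into it further."""
    label, body = tree
    if label == target:
        return [" ".join(leaves(tree))]
    if isinstance(body, str):
        return []
    cands = []
    for child in body:
        cands.extend(extract_top_down(child, target))
    return cands

def extract_bottom_up(tree, target="NP"):
    """Bottom-up: keep the *minimal* target phrases, i.e. target
    subtrees containing no smaller target subtree inside them."""
    label, body = tree
    if isinstance(body, str):
        return []
    inner = []
    for child in body:
        inner.extend(extract_bottom_up(child, target))
    if label == target and not inner:
        return [" ".join(leaves(tree))]
    return inner

# Toy parse of "the fused deposition modeling printer melts plastic filament"
tree = ("S", [
    ("NP", [("DT", "the"),
            ("NP", [("NN", "fused"), ("NN", "deposition"),
                    ("NN", "modeling")]),
            ("NN", "printer")]),
    ("VP", [("VBZ", "melts"),
            ("NP", [("NN", "plastic"), ("NN", "filament")])]),
])

print(extract_top_down(tree))
# → ['the fused deposition modeling printer', 'plastic filament']
print(extract_bottom_up(tree))
# → ['fused deposition modeling', 'plastic filament']
```

The two traversals yield different candidate granularities from the same tree; a subsequent scoring step (e.g. the paper's combination of statistical frequency and semantic information) would then select the most appropriate candidates.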
To address the low contrast, poor detail, and low color saturation of low-illumination images, a bionic image enhancement algorithm combining top-hat and bottom-hat transformations was proposed by analyzing the non-linear relationship between the subjective brightness sensation of the human eye and the transmission characteristics of the receptive fields of retinal ganglion cells. Firstly, the RGB (Red, Green, Blue) color space of the low-illumination image was converted into HSV (Hue, Saturation, Value) space, and a global logarithmic brightness transformation was applied to the value component. Secondly, a tri-Gaussian model of the retinal neuron receptive field was used to adjust the contrast of local edges in the image. Finally, top-hat and bottom-hat transformations were used to assist the extraction of high-brightness background. Experimental results show that low-illumination images enhanced by the proposed algorithm have clear details and high contrast, without the uneven illumination and depth-of-field problems present in images captured by the device. The enhanced images have high color saturation and a strong visual effect.
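The color-space conversion with logarithmic brightening, and the top-hat/bottom-hat pair, can be sketched per pixel and per scan line. This is a minimal stdlib illustration under stated assumptions: the specific log mapping `v' = log(1 + c·v) / log(1 + c)`, the gain `c`, and the flat 1-D structuring element are common choices, not necessarily the paper's exact formulas.

```python
# Hedged sketch of two stages of a low-illumination enhancement pipeline.
# The log mapping and 1-D flat structuring element are illustrative
# assumptions, not the paper's exact design.
import colorsys
import math

def log_brighten(rgb, c=10.0):
    """Convert an (r, g, b) pixel in [0, 1] to HSV, brighten the value
    channel with v' = log(1 + c*v) / log(1 + c), and convert back."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    v2 = math.log1p(c * v) / math.log1p(c)
    return colorsys.hsv_to_rgb(h, s, v2)

def erode(x, k=3):
    """Grayscale erosion of a 1-D signal with a flat window of width k."""
    r = k // 2
    return [min(x[max(0, i - r):i + r + 1]) for i in range(len(x))]

def dilate(x, k=3):
    """Grayscale dilation of a 1-D signal with a flat window of width k."""
    r = k // 2
    return [max(x[max(0, i - r):i + r + 1]) for i in range(len(x))]

def top_hat(x, k=3):
    """Top-hat = signal - opening: isolates bright details narrower
    than the structuring element."""
    opened = dilate(erode(x, k), k)
    return [a - b for a, b in zip(x, opened)]

def bottom_hat(x, k=3):
    """Bottom-hat = closing - signal: isolates dark details narrower
    than the structuring element."""
    closed = erode(dilate(x, k), k)
    return [a - b for a, b in zip(closed, x)]

# A dark pixel gets a brighter value channel after the log transform.
print(log_brighten((0.1, 0.05, 0.02)))

# Top-hat picks out an isolated bright peak; bottom-hat, a dark dip.
print(top_hat([0, 0, 5, 0, 0]))     # → [0, 0, 5, 0, 0]
print(bottom_hat([5, 5, 0, 5, 5]))  # → [0, 0, 5, 0, 0]
```

In a full 2-D pipeline the morphological operations would run over image neighborhoods (e.g. via OpenCV's `cv2.morphologyEx`); the 1-D version above only shows why the top-hat responds to bright structures and the bottom-hat to dark ones, which is what lets the pair assist in separating high-brightness background.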